A multi-layer perceptron is an early multi-layer neural network architecture with three layers: input, hidden, and output. The combination of sigmoid activation functions and backpropagation made it possible to learn the weights of the hidden layer, launching the modern field of neural networks.
Used in Chap. 6: pages 83, 84, 86, 87, 90; Chap. 7: page 107
Figure: A multi-layer perceptron architecture.
Figure: A simple multi-layer perceptron to solve the XOR problem.
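As an illustrative sketch (not part of the original entry), the following NumPy program trains a small multi-layer perceptron with sigmoid activations by backpropagation on the XOR truth table. The layer sizes, learning rate, and iteration count are assumptions chosen for reliable convergence, not values from the source.

import numpy as np

# Sketch: a 2-4-1 multi-layer perceptron with sigmoid activations,
# trained by backpropagation to solve XOR. Hyperparameters are
# illustrative assumptions.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR truth table: inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases: input -> hidden (4 units), hidden -> output (1 unit).
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros(1)

lr = 1.0
for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)            # hidden-layer activations
    out = sigmoid(h @ W2 + b2)          # network output

    # Backward pass: gradients of squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)         # error signal at output
    d_h = (d_out @ W2.T) * h * (1 - h)          # backpropagated to hidden layer

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # approaches the XOR targets [[0], [1], [1], [0]]

The hidden layer is what makes XOR solvable: a single-layer perceptron cannot represent this non-linearly-separable function, whereas the learned hidden units carve the input space so the output unit can combine them linearly.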